
    Automatic modelling of 3D trees using aerial LIDAR point cloud data and deep learning

    3D tree objects can be used in various applications, such as the estimation of physiological equivalent temperature (PET). In this project, a method is designed to extract 3D tree objects from a country-wide point cloud. To apply this method at a large scale, the algorithm needs to be efficient. Extraction of trees is done in two steps: point-wise classification using the PointNet deep learning network, and watershed segmentation to split the points into individual trees. After that, 3D tree models are made. The method is evaluated on three areas in the city of Deventer, the Netherlands: a park, a city centre and a housing block. This resulted in an average accuracy of 92% and an F1-score of 0.96.
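    A minimal sketch of how the crown-splitting step could look, assuming the points classified as "tree" have first been rasterised into a canopy height model (CHM); the function name, thresholds and use of scikit-image are illustrative, not the paper's implementation:

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_trees(chm, min_height=2.0, min_distance=5):
    """Split a rasterised tree mask into individual crowns via watershed."""
    mask = chm > min_height                        # keep vegetation above 2 m
    smoothed = ndimage.gaussian_filter(chm, sigma=1.0)
    # local maxima of the smoothed CHM serve as one marker per tree top
    peaks = peak_local_max(smoothed, min_distance=min_distance,
                           labels=mask.astype(int))
    markers = np.zeros(chm.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # the watershed grows each marker downhill over the inverted CHM
    return watershed(-smoothed, markers, mask=mask)  # 0 = background
```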

    An Automatic Procedure For Mobile Laser Scanning Platform 6DOF Trajectory Adjustment

    In this paper, a method is presented to improve the MLS platform’s trajectory in GNSS-denied areas. The method comprises two major steps. The first step is based on a 2D image registration technique described in our previous publication. Internally, this registration technique first performs aerial-to-aerial image matching, which yields correspondences that enable the computation of 3D tie points by multiview triangulation. Similarly, it registers the rasterized Mobile Laser Scanning Point Cloud (MLSPC) patches with the multiple related aerial image patches. The latter registration provides the correspondence between the aerial-to-aerial tie points and the MLSPC’s 3D points. In the second step, which is described in this paper, a procedure utilizes three kinds of observations to improve the MLS platform’s trajectory. The first type of observation is the set of 3D tie points computed automatically in the previous step (and already available), the second is based on IMU readings, and the third is a soft constraint over related pose parameters. Here, the 3D tie points are considered accurate and precise observations, since they provide both locally and globally strict constraints, whereas the IMU observations and soft constraints only provide locally precise constraints. For the 6DOF trajectory representation, the pose [R, t] parameters are first converted to six B-spline functions over time. Then, for the trajectory adjustment, the coefficients of the B-splines are updated from the established observations. We tested our method on an MLS data set acquired at a test area in Rotterdam and verified the trajectory improvement by evaluation with independently and manually measured GCPs. After the adjustment, the trajectory achieved an accuracy of RMSE X = 9 cm, Y = 14 cm and Z = 14 cm. Analysing the error in the updated trajectory suggests that our procedure is effective at adjusting the 6DOF trajectory and at regenerating a reliable MLSPC product.
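    As a hedged illustration of the trajectory representation, the snippet below fits one cubic B-spline per pose parameter with SciPy; in the adjustment itself the spline coefficients would become the unknowns of a least-squares problem, which is only indicated in a comment here:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

def fit_trajectory_splines(times, poses, k=3):
    """poses: (N, 6) samples of [x, y, z, roll, pitch, yaw] over time."""
    return [make_interp_spline(times, poses[:, i], k=k) for i in range(6)]

def evaluate_pose(splines, t):
    """Interpolated 6DOF pose at an arbitrary timestamp t."""
    return np.array([s(t) for s in splines])

# In the adjustment step, the coefficient vectors (spline.c) would be
# updated by a least-squares solver fed with tie-point, IMU and
# soft-constraint observations, e.g. via scipy.optimize.least_squares.
```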

    Automatic extraction of vertical walls from mobile and airborne laser scanning data


    Automatic 3D building model generation using deep learning methods based on cityjson and 2D floor plans

    In the past decade, much effort has been put into applying digital innovations to building life cycles. 3D models have proven effective for decision making, scenario simulation and 3D data analysis during this life cycle. Creating such a digital representation of a building can be a labour-intensive task, depending on the desired scale and level of detail (LOD). This research aims to create a new automatic deep-learning-based method for building model reconstruction. It combines exterior and interior data sources: 1) the 3D BAG, and 2) archived floor plan images. To reconstruct 3D building models from the two data sources, an innovative combination of methods is proposed. To obtain the information needed from the floor plan images (walls, openings and labels), deep learning techniques are used. In addition, post-processing techniques are introduced to transform the data into the required format. To fuse the extracted 2D data and the 3D exterior, a data fusion process is introduced. In the literature review, no prior research on the automatic integration of CityGML/JSON and floor plan images was found; this method is therefore a first approach to this data integration.
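    As a small, hedged example of handling the exterior source, the snippet below reads a CityJSON tile (the format in which the 3D BAG is distributed) and lists the building objects to which reconstructed interior geometry could be attached; the file name is a placeholder:

```python
import json

with open("3dbag_tile.city.json") as f:   # placeholder tile name
    cj = json.load(f)

# CityJSON stores all features in a flat "CityObjects" dictionary.
buildings = {oid: obj for oid, obj in cj["CityObjects"].items()
             if obj["type"] in ("Building", "BuildingPart")}

for oid, obj in buildings.items():
    lods = [g.get("lod") for g in obj.get("geometry", [])]
    print(oid, "available LoDs:", lods)
```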

    Vision-based indoor localization via a visual slam approach

    With increasing interest in indoor location-based services, vision-based indoor localization techniques have attracted much attention from both academia and industry. Inspired by the development of simultaneous localization and mapping (SLAM), we present a visual SLAM-based approach to achieve a 6 degrees of freedom (DoF) pose in indoor environments. First, the indoor scene is explored by a keyframe-based global mapping technique, which generates a database from a sequence of images covering the entire scene. After the exploration, a feature vocabulary tree is trained to accelerate feature matching in the image retrieval phase, and the spatial structures obtained from the keyframes are stored. Instead of querying with a single image, a short sequence of images at the query site is used to extract both features and their relative poses, which is a local visual SLAM procedure. The relative poses of the query images provide a pose-graph-based geometric constraint, which is used to assess the validity of the image retrieval results. The final positioning result is obtained by selecting the pose of the first correct corresponding image.
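    The ranking step of the retrieval phase can be sketched with plain ORB features and brute-force Hamming matching in OpenCV; this is a simplified stand-in for the paper's vocabulary-tree lookup, and the file names are placeholders:

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def match_score(query_path, keyframe_path, ratio=0.75):
    """Count Lowe-ratio-test matches between a query frame and a keyframe."""
    q = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    k = cv2.imread(keyframe_path, cv2.IMREAD_GRAYSCALE)
    _, dq = orb.detectAndCompute(q, None)
    _, dk = orb.detectAndCompute(k, None)
    matches = matcher.knnMatch(dq, dk, k=2)
    return sum(1 for pair in matches
               if len(pair) == 2
               and pair[0].distance < ratio * pair[1].distance)

# Retrieval: rank all database keyframes by match_score, then verify the
# top candidates against the pose graph of the short query sequence.
```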

    Context-Based Filtering of Noisy Labels for Automatic Basemap Updating From UAV Data

    Unmanned aerial vehicles (UAVs) can obtain high-resolution aerial imagery at frequent intervals, making them a valuable tool for urban planners who require up-to-date basemaps. Supervised classification methods can be exploited to translate the UAV data into such basemaps. However, these methods require labeled training samples, the collection of which may be complex and time consuming. Existing spatial datasets can be exploited to provide the training labels, but these often contain errors due to differences in date or resolution between the UAV data and the dataset from which the outdated labels were obtained. In this paper, we propose an approach for updating basemaps that uses global and local contextual cues to automatically remove unreliable samples from the training set and thereby improve the classification accuracy. Using UAV datasets over Kigali, Rwanda, and Dar es Salaam, Tanzania, we demonstrate that the amount of mislabeled training samples can be reduced by 44.1% and 35.5%, respectively, leading to a classification accuracy of 92.1% in Kigali and 91.3% in Dar es Salaam. To achieve the same accuracy in Dar es Salaam, between 50,000 and 60,000 manually labeled image segments would be needed. This demonstrates that the proposed approach of using outdated spatial data to provide labels and iteratively removing unreliable samples is a viable method for obtaining high classification accuracy while reducing the costly step of acquiring labeled training samples.
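    A hedged sketch of the iterative filtering idea: train on the outdated labels, drop samples where a confident prediction contradicts the given label, and retrain. A generic random forest stands in for the paper's context-based cues, and the threshold and round count are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def filter_noisy_labels(X, y, n_rounds=3, confidence=0.9, seed=0):
    """Return a boolean mask of training samples judged reliable."""
    keep = np.ones(len(y), dtype=bool)
    for _ in range(n_rounds):
        clf = RandomForestClassifier(n_estimators=200, random_state=seed)
        clf.fit(X[keep], y[keep])                 # train on surviving samples
        proba = clf.predict_proba(X)
        pred = clf.classes_[proba.argmax(axis=1)]
        conf = proba.max(axis=1)
        # flag samples where a confident prediction contradicts the old label
        keep &= ~((pred != y) & (conf > confidence))
    return keep
```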

    Towards better classification of land cover and land use based on convolutional neural networks

    Land use and land cover are two important variables in remote sensing. Commonly, land use information is stored in geospatial databases. To update such databases, we present a new approach to determine land cover and to classify land use objects using convolutional neural networks (CNNs). High-resolution aerial images and derived data such as digital surface models serve as input. An encoder-decoder-based CNN is used for land cover classification. We found that a composite including the infrared band and height data outperforms RGB images in land cover classification. We also propose a CNN-based methodology for predicting land use labels from the geospatial databases, where we use masks representing object shape, the RGB images and the pixel-wise class scores of the land cover as input. For this task, we developed a two-branch network in which the first branch considers the whole area of an image, while the second branch focuses on a smaller relevant area. We evaluated our methods on two sites and achieved overall accuracies of up to 89.6% and 81.7% for land cover and land use, respectively. We also tested our land cover classification method on the Vaihingen dataset of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 90.7%.
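    An illustrative PyTorch sketch of a two-branch classifier of this kind: one branch sees the whole object patch, the other a zoomed-in crop, and their features are fused before the classifier. The layer sizes are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TwoBranchNet(nn.Module):
    def __init__(self, in_channels, n_classes):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.global_branch = branch()   # whole area (mask + RGB + scores)
        self.local_branch = branch()    # smaller relevant area
        self.classifier = nn.Linear(128, n_classes)  # 64 + 64 fused features

    def forward(self, x_global, x_local):
        f = torch.cat([self.global_branch(x_global),
                       self.local_branch(x_local)], dim=1)
        return self.classifier(f)
```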